feat(RHOAIENG-50753): add LDAP user and group autocomplete for workspace sharing #805
Conversation
Force-pushed from a316335 to 18b453a (Compare)
Ambient Code Platform: Kubernetes-native AI automation platform that orchestrates agentic sessions through containerized microservices. Built with Go (backend, operator), Next.js + Shadcn (frontend), Python (runner), and Kubernetes CRDs.

Commands:

```
make build-all        # Build all container images
make deploy           # Deploy to cluster
make test             # Run tests
make lint             # Lint code
make kind-up          # Start local Kind cluster
make test-e2e-local   # Run E2E tests against Kind
```

Per-component:

```
# Backend / Operator (Go)
cd components/backend && gofmt -l . && go vet ./... && golangci-lint run
cd components/operator && gofmt -l . && go vet ./... && golangci-lint run

# Frontend
cd components/frontend && npm run build   # Must pass with 0 errors, 0 warnings

# Runner (Python)
cd components/runners/ambient-runner && uv venv && uv pip install -e .

# Docs
cd docs && npm run dev   # http://localhost:4321
```

Pre-commit hooks: the project uses the pre-commit framework to run linters locally before every commit. Install them with `make setup-hooks`.

```
make lint                                # All hooks, all files
pre-commit run gofmt-check --all-files   # Single hook
pre-commit run --files path/to/file.go   # Single file
git commit --no-verify                   # Skip pre-commit hooks
git push --no-verify                     # Skip pre-push hooks
```

More info: see BOOKMARKS.md for architecture decisions, development context, code patterns, and component-specific guides.
Claude Code Review Summary

This PR adds LDAP-backed user and group autocomplete to the workspace sharing flow. The feature is well scoped, optional by default (graceful degradation when unconfigured), and includes solid test coverage. The LDAP client uses proper filter escaping and input sanitization. A few issues need attention before merge: all three new handlers use the wrong variable for the auth nil check (deviating from the mandatory project pattern), and the frontend builds a custom combobox from scratch when a Shadcn equivalent exists.

Blocker Issues: none.

Critical Issues

1. Auth nil check uses wrong client variable in all 3 handlers. All three handlers discard the typed client and check the dynamic one:

```go
_, k8sDyn := GetK8sClientsForRequest(c)
if k8sDyn == nil {
```

The documented mandatory pattern across the codebase is:

```go
reqK8s, _ := GetK8sClientsForRequest(c)
if reqK8s == nil {
    c.JSON(http.StatusUnauthorized, gin.H{"error": "Invalid or missing token"})
    c.Abort()
    return
}
```

While functionally equivalent today (both return nil together), this breaks the established convention and hides the fact that the dynamic client is otherwise unused.

Major Issues

2. Custom combobox dropdown built from scratch instead of Shadcn Command. The component renders a bare dropdown even though the Shadcn combobox pattern already covers this use case.

3. New LDAP connection opened per search, no pooling. Each search dials a fresh TCP+TLS connection and closes it immediately.

4. LDAP_GROUP_BASE_DN env var not mapped in the deployment manifest.

Minor Issues

5. k8sDyn obtained but never used for K8s operations. The dynamic client is obtained and only checked for nil, never used for actual K8s operations; once Critical issue 1 is fixed, this goes away.

6. Component exceeds 200-line guideline.

7. Trivial JSDoc comments on self-evident functions.
Force-pushed from 18b453a to 134aa71 (Compare)
Force-pushed from 134aa71 to 5c38d48 (Compare)
Disabling TLS verification in production is intentional for now until we switch to an authenticated LDAP server. The risk of doing this is minimal since the cluster is for authenticated users only and the risk of MITM is low.
Force-pushed from 5c38d48 to 2aad455 (Compare)
feat(RHOAIENG-50753): add LDAP user and group autocomplete for workspace sharing

Integrate LDAP to provide autocomplete when sharing workspaces. The backend queries LDAP with substring matching on uid, givenName, and sn for users, and prefix matching on cn for groups. Results are cached in-memory with a 5-minute TTL. The frontend replaces the plain text input with an autocomplete dropdown that debounces queries and supports keyboard navigation. LDAP is optional and gracefully degrades to manual input when unconfigured or unreachable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
Register ldap.autocomplete.enabled in flags.json with the scope:workspace tag so it appears in the workspace Feature Flags UI. The backend LDAP endpoints check the flag via FeatureEnabled() (fail-closed), and the frontend uses useFlag() from the Unleash SDK. Workspace admins can toggle LDAP autocomplete per-workspace.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
Force-pushed from 2aad455 to ebc132a (Compare)
Walkthrough

Adds LDAP integration: backend LDAP client with caching and search, HTTP handlers, frontend proxy routes and React Query hooks, an LDAP autocomplete React component integrated into workspace sharing behind a feature flag, and Kubernetes ConfigMaps plus deployment env vars for LDAP configuration.
Sequence Diagram

sequenceDiagram
participant User as User
participant UI as LDAPAutocomplete
participant ReactQuery as React Query
participant Frontend as Frontend Route
participant Backend as Backend Handler
participant Client as LDAP Client
participant LDAP as LDAP Server
User->>UI: Type query
UI->>UI: Debounce 300ms
UI->>ReactQuery: request search(query)
ReactQuery->>ReactQuery: check cache
alt cache hit
ReactQuery-->>UI: return cached results
else
ReactQuery->>Frontend: GET /api/ldap/...?q=query
Frontend->>Backend: forward request + headers
Backend->>Backend: auth & validate query
Backend->>Client: SearchUsers/SearchGroups(query)
Client->>Client: check local cache
alt cache hit
Client-->>Backend: return cached entries
else
Client->>LDAP: LDAP search with filter
LDAP-->>Client: entries
Client->>Client: parse & cache results
Client-->>Backend: return entries
end
Backend-->>Frontend: 200 + JSON
Frontend-->>ReactQuery: 200 + JSON
ReactQuery-->>UI: results
end
UI->>User: render suggestions
Estimated code review effort: 🎯 4 (Complex), ⏱️ ~50 minutes.

🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning).
test connection
Actionable comments posted: 9
♻️ Duplicate comments (1)
components/manifests/overlays/production/ldap-config.yaml (1)
8-10: ⚠️ Potential issue | 🟠 Major

Do not ship the production overlay with LDAPS verification disabled.

Line 10 sets LDAP_SKIP_TLS_VERIFY: "true", which makes the backend trust spoofed LDAPS endpoints. Please wire in the corporate CA/trusted bundle and keep verification enabled before rolling this out in production.

Minimal manifest change once the CA bundle is wired in:

```diff
- LDAP_SKIP_TLS_VERIFY: "true"
+ LDAP_SKIP_TLS_VERIFY: "false"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/manifests/overlays/production/ldap-config.yaml` around lines 8 - 10, Replace the insecure LDAP_SKIP_TLS_VERIFY=true with TLS verification enabled and wire in the corporate CA bundle: add an env var like LDAP_CA_BUNDLE (or LDAP_TRUSTED_CA) and populate it from the corporate CA secret/configMap, ensure the deployment mounts the CA bundle into the pod and the LDAP client (code that reads LDAP_BASE_DN/LDAP_URL) is configured to use LDAP_CA_BUNDLE for TLS verification, and set LDAP_SKIP_TLS_VERIFY to "false" (or remove it) so LDAPS verification remains enabled before shipping production.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@components/backend/go.mod`:
- Line 39: Run the verification commands shown to check whether any Go source in
the backend imports github.com/go-ldap/ldap/v3; if it does, remove the //
indirect marker and promote github.com/go-ldap/ldap/v3 to a direct require in
go.mod by adding a require entry for that module (same version currently listed)
and then run go mod tidy to update the module graph; if no direct imports exist,
leave the dependency as indirect. Ensure you reference the module name
"github.com/go-ldap/ldap/v3" and update the go.mod require block accordingly,
then run Go tooling (go mod tidy) to validate and commit the change.
In `@components/backend/handlers/ldap.go`:
- Around line 37-44: Change all LDAP error responses to use HTTP 503 Service
Unavailable for consistency with external dependency failures: update the
handlers that call LDAPClient.SearchUsers and LDAPClient.SearchGroups to return
c.JSON(http.StatusServiceUnavailable, ...) on err (instead of 500), and modify
the GetLDAPUser handler to return http.StatusServiceUnavailable when LDAP lookup
fails; ensure the error log messages remain unchanged and only the HTTP status
code is adjusted.
- Around line 18-24: The error returned from GetK8sClientsForRequest is being
discarded in SearchLDAPUsers (and likewise in SearchLDAPGroups and GetLDAPUser);
modify each function to capture the returned error and log it before returning
the 401 response — e.g., call your existing logger (or c.Error()/log.Printf)
with a clear message like "GetK8sClientsForRequest failed" plus the error value
when reqK8s is nil; ensure you do this in the SearchLDAPUsers, SearchLDAPGroups,
and GetLDAPUser functions where GetK8sClientsForRequest is called.
In `@components/backend/ldap/client_test.go`:
- Around line 233-257: The TestCacheTTL test uses a very small 10ms margin
(client.cacheTTL = 50ms, Sleep 60ms) which can flake under CI; either increase
the margin in TestCacheTTL (e.g., set cacheTTL to 200ms and Sleep to 300ms) or
refactor Client to accept a clock/time provider (inject a timeNow/Clock used by
cacheGet/cacheSet expiration logic) so tests can use a fake clock—update the
Client struct (and its cacheGet/cacheSet expiration checks) to read from the
injected clock and adjust TestCacheTTL to use the fake clock or a larger sleep
margin to eliminate flakiness.
In `@components/backend/ldap/client.go`:
- Around line 96-120: The cache currently only evicts expired entries on access
(cacheGet) so c.cache (sync.Map) can grow unbounded; add a background janitor
goroutine that periodically ranges over c.cache, type-asserts values to
cacheEntry, and deletes entries whose expiresAt is before time.Now(), using
c.cache.Delete, and start this janitor when the Client is created (e.g. in the
constructor or an init method) and stop it when the Client is closed; reference
cacheGet, cacheSet, cacheEntry, c.cache and c.cacheTTL when implementing the
interval and eviction logic.
- Around line 318-326: sanitizeQuery currently truncates by bytes using
q[:maxQueryLength], which can split multi-byte UTF-8 characters; change
truncation to be rune-aware by converting q to a rune slice and trimming like:
runes := []rune(q); if len(runes) > maxQueryLength { q =
string(runes[:maxQueryLength]) }; keep the existing TrimSpace and Replace steps
and reference the sanitizeQuery function and maxQueryLength constant when making
this change.
In `@components/backend/main.go`:
- Around line 141-144: The code currently falls back to using LDAP_BASE_DN for
LDAP_GROUP_BASE_DN which causes group searches to run under the users OU; change
the logic around handlers.LDAPClient creation so that LDAP_GROUP_BASE_DN is not
defaulted to LDAP_BASE_DN — read LDAP_GROUP_BASE_DN via os.Getenv and if empty
pass an empty string (or a sentinel) to ldap.NewClient and ensure
group-autocomplete is disabled when LDAP_GROUP_BASE_DN is empty instead of
searching under LDAP_BASE_DN; update ldap.NewClient (and any code using its
groupBaseDN parameter) or the caller to treat an empty groupBaseDN as “groups
disabled” rather than falling back to getEnvOrDefault("LDAP_BASE_DN", ...) or
reusing ldapBaseDN.
In `@components/frontend/src/app/api/ldap/users/route.ts`:
- Around line 9-12: The fetch call that proxies to
`${BACKEND_URL}/ldap/users${queryString}` can hang indefinitely; wrap it with an
AbortController and a short timeout (e.g., 2–5s) so the request is aborted when
the timer elapses, pass controller.signal to fetch (alongside existing headers
and method), and clear the timeout on success; apply the same pattern to the
sibling LDAP proxy routes that use BACKEND_URL, queryString, headers, and the
route handler(s) to ensure graceful degradation.
In `@components/manifests/overlays/kind/ldap-config.yaml`:
- Around line 8-10: The manifest currently disables server TLS verification via
LDAP_SKIP_TLS_VERIFY: "true"; change LDAP_SKIP_TLS_VERIFY to "false" in the YAML
and instead provision the LDAP CA into the runtime trust store for the container
image (for example by adding the CA to the image or mounting a ConfigMap/Secret
into the container's system trust directory and running the appropriate
trust-update step) so that the LDAPS endpoint at LDAP_URL and base LDAP_BASE_DN
are validated; ensure any container startup hooks or Dockerfile layers invoke
the platform's CA import command (e.g., update-ca-trust/update-ca-certificates)
so verification succeeds without skipping TLS.
---
Duplicate comments:
In `@components/manifests/overlays/production/ldap-config.yaml`:
- Around line 8-10: Replace the insecure LDAP_SKIP_TLS_VERIFY=true with TLS
verification enabled and wire in the corporate CA bundle: add an env var like
LDAP_CA_BUNDLE (or LDAP_TRUSTED_CA) and populate it from the corporate CA
secret/configMap, ensure the deployment mounts the CA bundle into the pod and
the LDAP client (code that reads LDAP_BASE_DN/LDAP_URL) is configured to use
LDAP_CA_BUNDLE for TLS verification, and set LDAP_SKIP_TLS_VERIFY to "false" (or
remove it) so LDAPS verification remains enabled before shipping production.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: bdfceffd-15bd-4f4d-b7a3-09443dc23b7b
⛔ Files ignored due to path filters (1)
components/backend/go.sum is excluded by !**/*.sum
📒 Files selected for processing (24)
- Makefile
- components/backend/go.mod
- components/backend/handlers/ldap.go
- components/backend/ldap/client.go
- components/backend/ldap/client_test.go
- components/backend/main.go
- components/backend/routes.go
- components/frontend/src/app/api/ldap/groups/route.ts
- components/frontend/src/app/api/ldap/users/[uid]/route.ts
- components/frontend/src/app/api/ldap/users/route.ts
- components/frontend/src/components/workspace-sections/_components/ldap-autocomplete.tsx
- components/frontend/src/components/workspace-sections/sharing-section.tsx
- components/frontend/src/services/api/index.ts
- components/frontend/src/services/api/ldap.ts
- components/frontend/src/services/queries/index.ts
- components/frontend/src/services/queries/use-ldap.ts
- components/manifests/base/backend-deployment.yaml
- components/manifests/base/flags.json
- components/manifests/overlays/kind/kustomization.yaml
- components/manifests/overlays/kind/ldap-config.yaml
- components/manifests/overlays/local-dev/kustomization.yaml
- components/manifests/overlays/local-dev/ldap-config.yaml
- components/manifests/overlays/production/kustomization.yaml
- components/manifests/overlays/production/ldap-config.yaml
```
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 // indirect
github.com/go-ldap/ldap/v3 v3.4.12 // indirect
```
🧹 Nitpick | 🔵 Trivial
Promote github.com/go-ldap/ldap/v3 to a direct requirement if the new LDAP client imports it.
If any backend file imports this module directly, keeping Line 39 as // indirect just creates churn the next time go mod tidy runs. If it is only transitive, ignore this.
Verify whether the dependency is directly imported:

```bash
#!/bin/bash
set -euo pipefail
rg -n --type=go 'github\.com/go-ldap/ldap/v3' components/backend
rg -n 'github\.com/go-ldap/ldap/v3' components/backend/go.mod
```

Minimal fix if the import is direct:

```diff
- github.com/go-ldap/ldap/v3 v3.4.12 // indirect
+ github.com/go-ldap/ldap/v3 v3.4.12
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/go.mod` at line 39, Run the verification commands shown to
check whether any Go source in the backend imports github.com/go-ldap/ldap/v3;
if it does, remove the // indirect marker and promote github.com/go-ldap/ldap/v3
to a direct require in go.mod by adding a require entry for that module (same
version currently listed) and then run go mod tidy to update the module graph;
if no direct imports exist, leave the dependency as indirect. Ensure you
reference the module name "github.com/go-ldap/ldap/v3" and update the go.mod
require block accordingly, then run Go tooling (go mod tidy) to validate and
commit the change.
```go
func SearchLDAPUsers(c *gin.Context) {
	reqK8s, _ := GetK8sClientsForRequest(c)
	if reqK8s == nil {
		c.JSON(http.StatusUnauthorized, gin.H{"error": "Invalid or missing token"})
		c.Abort()
		return
	}
```
🧹 Nitpick | 🔵 Trivial
Consider logging the discarded error from GetK8sClientsForRequest.
The error returned by GetK8sClientsForRequest is silently discarded. While returning 401 is correct, logging the error would aid debugging authentication issues.
♻️ Proposed fix to log auth errors
func SearchLDAPUsers(c *gin.Context) {
- reqK8s, _ := GetK8sClientsForRequest(c)
+ reqK8s, err := GetK8sClientsForRequest(c)
if reqK8s == nil {
+ if err != nil {
+ log.Printf("LDAP auth error: %v", err)
+ }
c.JSON(http.StatusUnauthorized, gin.H{"error": "Invalid or missing token"})
c.Abort()
return
}

Apply the same pattern to SearchLDAPGroups and GetLDAPUser.
Also applies to: 48-54, 78-84
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/handlers/ldap.go` around lines 18 - 24, The error returned
from GetK8sClientsForRequest is being discarded in SearchLDAPUsers (and likewise
in SearchLDAPGroups and GetLDAPUser); modify each function to capture the
returned error and log it before returning the 401 response — e.g., call your
existing logger (or c.Error()/log.Printf) with a clear message like
"GetK8sClientsForRequest failed" plus the error value when reqK8s is nil; ensure
you do this in the SearchLDAPUsers, SearchLDAPGroups, and GetLDAPUser functions
where GetK8sClientsForRequest is called.
```go
	users, err := LDAPClient.SearchUsers(query)
	if err != nil {
		log.Printf("LDAP user search error for query %q: %v", query, err)
		c.JSON(http.StatusServiceUnavailable, gin.H{"error": "LDAP search unavailable"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"users": users})
```
🧹 Nitpick | 🔵 Trivial
Inconsistent HTTP status codes for LDAP errors.
SearchUsers/SearchGroups return 503 Service Unavailable on LDAP errors, while GetLDAPUser returns 500 Internal Server Error. Consider aligning these for consistency—503 is more appropriate since LDAP is an external dependency.
♻️ Proposed fix for consistency
user, err := LDAPClient.GetUser(uid)
if err != nil {
log.Printf("LDAP user get error for uid %q: %v", uid, err)
- c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to look up user"})
+ c.JSON(http.StatusServiceUnavailable, gin.H{"error": "LDAP lookup unavailable"})
return
}Also applies to: 67-74, 97-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/handlers/ldap.go` around lines 37 - 44, Change all LDAP
error responses to use HTTP 503 Service Unavailable for consistency with
external dependency failures: update the handlers that call
LDAPClient.SearchUsers and LDAPClient.SearchGroups to return
c.JSON(http.StatusServiceUnavailable, ...) on err (instead of 500), and modify
the GetLDAPUser handler to return http.StatusServiceUnavailable when LDAP lookup
fails; ensure the error log messages remain unchanged and only the HTTP status
code is adjusted.
```go
func TestCacheTTL(t *testing.T) {
	client := &Client{
		cacheTTL: 50 * time.Millisecond,
	}

	client.cacheSet("test-key", "test-value")

	// Should be in cache
	val, ok := client.cacheGet("test-key")
	if !ok {
		t.Fatal("expected cache hit")
	}
	if val.(string) != "test-value" {
		t.Errorf("expected 'test-value', got %v", val)
	}

	// Wait for expiry
	time.Sleep(60 * time.Millisecond)

	// Should be expired
	_, ok = client.cacheGet("test-key")
	if ok {
		t.Fatal("expected cache miss after TTL")
	}
}
```
🧹 Nitpick | 🔵 Trivial
Cache TTL test may be flaky under CI load.
The 10ms margin between TTL (50ms) and sleep (60ms) could cause intermittent failures on slow/loaded CI runners. Consider using a time abstraction or increasing the margin.
♻️ Potential improvement for test stability
func TestCacheTTL(t *testing.T) {
client := &Client{
- cacheTTL: 50 * time.Millisecond,
+ cacheTTL: 20 * time.Millisecond,
}
client.cacheSet("test-key", "test-value")
// Should be in cache
val, ok := client.cacheGet("test-key")
if !ok {
t.Fatal("expected cache hit")
}
if val.(string) != "test-value" {
t.Errorf("expected 'test-value', got %v", val)
}
// Wait for expiry
- time.Sleep(60 * time.Millisecond)
+ time.Sleep(50 * time.Millisecond)
// Should be expired
_, ok = client.cacheGet("test-key")
if ok {
t.Fatal("expected cache miss after TTL")
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/ldap/client_test.go` around lines 233 - 257, The
TestCacheTTL test uses a very small 10ms margin (client.cacheTTL = 50ms, Sleep
60ms) which can flake under CI; either increase the margin in TestCacheTTL
(e.g., set cacheTTL to 200ms and Sleep to 300ms) or refactor Client to accept a
clock/time provider (inject a timeNow/Clock used by cacheGet/cacheSet expiration
logic) so tests can use a fake clock—update the Client struct (and its
cacheGet/cacheSet expiration checks) to read from the injected clock and adjust
TestCacheTTL to use the fake clock or a larger sleep margin to eliminate
flakiness.
```go
// cacheGet returns a cached value if it exists and hasn't expired.
func (c *Client) cacheGet(key string) (any, bool) {
	val, ok := c.cache.Load(key)
	if !ok {
		return nil, false
	}
	entry, ok := val.(cacheEntry)
	if !ok {
		c.cache.Delete(key)
		return nil, false
	}
	if time.Now().After(entry.expiresAt) {
		c.cache.Delete(key)
		return nil, false
	}
	return entry.value, true
}

// cacheSet stores a value in the cache with TTL.
func (c *Client) cacheSet(key string, value any) {
	c.cache.Store(key, cacheEntry{
		value:     value,
		expiresAt: time.Now().Add(c.cacheTTL),
	})
}
```
Cache entries are never proactively cleaned; memory may grow unbounded.
Expired entries are only removed on access (cacheGet). If unique queries accumulate without being re-accessed, the sync.Map grows indefinitely. Consider adding periodic cleanup or using a bounded LRU cache.
🛠️ Suggested approach
Options to address this:
- Add a background goroutine that periodically scans and evicts expired entries
- Use a bounded cache library like hashicorp/golang-lru or patrickmn/go-cache
- Accept the risk if query cardinality is expected to be low

Given the 5-minute TTL and typical usage patterns, this may be acceptable for now, but worth monitoring memory usage in production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/ldap/client.go` around lines 96 - 120, The cache currently
only evicts expired entries on access (cacheGet) so c.cache (sync.Map) can grow
unbounded; add a background janitor goroutine that periodically ranges over
c.cache, type-asserts values to cacheEntry, and deletes entries whose expiresAt
is before time.Now(), using c.cache.Delete, and start this janitor when the
Client is created (e.g. in the constructor or an init method) and stop it when
the Client is closed; reference cacheGet, cacheSet, cacheEntry, c.cache and
c.cacheTTL when implementing the interval and eviction logic.
```go
// sanitizeQuery cleans and truncates a search query.
func sanitizeQuery(q string) string {
	q = strings.TrimSpace(q)
	q = strings.NewReplacer("\n", "", "\r", "").Replace(q)
	if len(q) > maxQueryLength {
		q = q[:maxQueryLength]
	}
	return q
}
```
UTF-8 truncation may split multi-byte characters.
q[:maxQueryLength] slices at byte position 50, which can split a multi-byte UTF-8 character (e.g., emoji or non-ASCII names), producing invalid UTF-8 and potentially unexpected LDAP filter behavior.
🔧 Proposed fix using rune-aware truncation
func sanitizeQuery(q string) string {
q = strings.TrimSpace(q)
q = strings.NewReplacer("\n", "", "\r", "").Replace(q)
- if len(q) > maxQueryLength {
- q = q[:maxQueryLength]
+ if len([]rune(q)) > maxQueryLength {
+ q = string([]rune(q)[:maxQueryLength])
}
return q
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/ldap/client.go` around lines 318 - 326, sanitizeQuery
currently truncates by bytes using q[:maxQueryLength], which can split
multi-byte UTF-8 characters; change truncation to be rune-aware by converting q
to a rune slice and trimming like: runes := []rune(q); if len(runes) >
maxQueryLength { q = string(runes[:maxQueryLength]) }; keep the existing
TrimSpace and Replace steps and reference the sanitizeQuery function and
maxQueryLength constant when making this change.
```go
	ldapBaseDN := getEnvOrDefault("LDAP_BASE_DN", "ou=users,dc=redhat,dc=com")
	ldapGroupBaseDN := os.Getenv("LDAP_GROUP_BASE_DN") // optional, derived from LDAP_BASE_DN if empty
	skipTLS := os.Getenv("LDAP_SKIP_TLS_VERIFY") == "true"
	handlers.LDAPClient = ldap.NewClient(ldapURL, ldapBaseDN, ldapGroupBaseDN, skipTLS)
```
Don't reuse the users OU for group search.
When LDAP_GROUP_BASE_DN is unset, the client falls back to LDAP_BASE_DN. With the current overlays setting LDAP_BASE_DN to ou=users,..., /ldap/groups will search inside the users OU and typically return nothing. Require an explicit group base DN, or disable group autocomplete when it isn't configured.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/backend/main.go` around lines 141 - 144, The code currently falls
back to using LDAP_BASE_DN for LDAP_GROUP_BASE_DN which causes group searches to
run under the users OU; change the logic around handlers.LDAPClient creation so
that LDAP_GROUP_BASE_DN is not defaulted to LDAP_BASE_DN — read
LDAP_GROUP_BASE_DN via os.Getenv and if empty pass an empty string (or a
sentinel) to ldap.NewClient and ensure group-autocomplete is disabled when
LDAP_GROUP_BASE_DN is empty instead of searching under LDAP_BASE_DN; update
ldap.NewClient (and any code using its groupBaseDN parameter) or the caller to
treat an empty groupBaseDN as “groups disabled” rather than falling back to
getEnvOrDefault("LDAP_BASE_DN", ...) or reusing ldapBaseDN.
```typescript
const response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
  method: 'GET',
  headers,
});
```
Bound the backend proxy call.
This route currently waits indefinitely for the backend/LDAP side. If that dependency hangs instead of erroring, the Next.js request thread hangs too and the graceful-degradation path never triggers. Add a short timeout/abort here, and mirror the same fix in the sibling LDAP proxy routes.
⏱️ Example fix
```diff
-const response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
-  method: 'GET',
-  headers,
-});
+const controller = new AbortController();
+const timeoutId = setTimeout(() => controller.abort(), 5000);
+let response: Response;
+try {
+  response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
+    method: 'GET',
+    headers,
+    signal: controller.signal,
+  });
+} finally {
+  clearTimeout(timeoutId);
+}
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```typescript
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 5000);
let response: Response;
try {
  response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
    method: 'GET',
    headers,
    signal: controller.signal,
  });
} finally {
  clearTimeout(timeoutId);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/frontend/src/app/api/ldap/users/route.ts` around lines 9 - 12, The
fetch call that proxies to `${BACKEND_URL}/ldap/users${queryString}` can hang
indefinitely; wrap it with an AbortController and a short timeout (e.g., 2–5s)
so the request is aborted when the timer elapses, pass controller.signal to
fetch (alongside existing headers and method), and clear the timeout on success;
apply the same pattern to the sibling LDAP proxy routes that use BACKEND_URL,
queryString, headers, and the route handler(s) to ensure graceful degradation.
```yaml
LDAP_URL: "ldaps://ldap.corp.redhat.com"
LDAP_BASE_DN: "ou=users,dc=redhat,dc=com"
LDAP_SKIP_TLS_VERIFY: "true"
```
Re-enable LDAPS certificate verification.
LDAP_SKIP_TLS_VERIFY: "true" disables server authentication for a real shared LDAP endpoint, so a network intermediary can impersonate the directory and feed forged autocomplete results into the sharing flow. If the cluster does not yet trust the issuing CA, ship that CA into the container trust store instead of turning verification off.
🧰 Tools
🪛 Checkov (3.2.334)
[low] 1-10: The default namespace should not be used
(CKV_K8S_21)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/overlays/kind/ldap-config.yaml` around lines 8 - 10, The
manifest currently disables server TLS verification via LDAP_SKIP_TLS_VERIFY:
"true"; change LDAP_SKIP_TLS_VERIFY to "false" in the YAML and instead provision
the LDAP CA into the runtime trust store for the container image (for example by
adding the CA to the image or mounting a ConfigMap/Secret into the container's
system trust directory and running the appropriate trust-update step) so that
the LDAPS endpoint at LDAP_URL and base LDAP_BASE_DN are validated; ensure any
container startup hooks or Dockerfile layers invoke the platform's CA import
command (e.g., update-ca-trust/update-ca-certificates) so verification succeeds
without skipping TLS.
The feature flag hook added in 513b5ae requires a QueryClientProvider, which the test harness doesn't supply. Mock it to unblock CI. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Martin Prpič <mprpic@redhat.com>
Review posted - see full analysis below in a follow-up comment.

CRITICAL: TLS verification disabled in production LDAP config (components/manifests/overlays/production/ldap-config.yaml:9 sets LDAP_SKIP_TLS_VERIFY to true). This exposes the ldaps:// connection to MITM attacks. Fix: set to false or remove the key in the production overlay. The dev overlays can keep it. Standard: security-standards.md.

MAJOR: Custom autocomplete dropdown instead of the Shadcn Combobox pattern. File: ldap-autocomplete.tsx. The dropdown uses raw ul/li elements with hand-rolled focus, keyboard navigation, and click-outside logic. Shadcn ships Command + Popover Combobox for exactly this use case. Standard violated: frontend-development.md "Shadcn UI Components Only" rule (FORBIDDEN: creating custom UI components from scratch). Fix: replace with Popover, PopoverContent, PopoverTrigger, Command, CommandInput, CommandList, CommandItem, CommandEmpty. Bonus: the component is also 252 lines (the limit is 200); switching to the Shadcn Combobox will bring it under the limit. Standard: frontend-development.md.

MINOR issues (4): (1) Inconsistent HTTP status codes: SearchLDAPUsers/Groups return 503 but GetLDAPUser returns 500 for the same failure class — standardise on 503 (handlers/ldap.go:104 vs :164). (2) Hardcoded org domain fallback in NewClient: the fallback ou=managedGroups,dc=redhat,dc=com bakes a Red Hat-specific domain into a generic library — log a warning instead (ldap/client.go:249). (3) The useLDAPUser hook and ldapApi.getUser are exported but not used anywhere in this PR — remove them or document the intent. (4) The autocomplete component has no error state: the useEffect checks isLoading/hasResults but not error, so LDAP query failures silently close the dropdown — render a brief message when error is set.

POSITIVE HIGHLIGHTS: (1) Excellent LDAP injection prevention — goldap.EscapeFilter() used in all filter builders, sanitizeQuery() strips newlines, %q format verb throughout error logs. (2) Auth pattern correct throughout — every handler uses GetK8sClientsForRequest(c) with 401 on nil, per k8s-client-usage.md. (3) Graceful degradation — LDAP optional (LDAP_URL env gated), Unleash-flag gated in frontend, sharing form falls back to plain Input. (4) Cache TTL consistent end-to-end — backend 5 min, frontend staleTime 5 min. (5) Solid test coverage — filters, TTL expiry, sanitizeQuery, ParseGitHubUsername, short-query cases all tested. (6) LDAPResultSizeLimitExceeded treated as non-fatal partial result — correct for autocomplete. (7) Makefile Unleash password fix replaces the hardcoded default with a kubectl secret lookup.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx`:
- Around line 29-31: The test currently forces useWorkspaceFlag() to always
return disabled which prevents rendering the LDAP-autocomplete branch in
SharingSection; change the mock so tests can control the flag per-case by
exporting/using a mutable mock function (e.g., vi.fn()) for useWorkspaceFlag and
set its return value inside individual tests or beforeEach/within specific test
cases; locate the vi.mock for use-feature-flags-admin and replace the fixed
return with a configurable mock (referencing useWorkspaceFlag and
SharingSection) so tests can call mockReturnValue or mockImplementation to
enable/disable the feature as needed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2c6fddcf-913d-433a-9c5c-1bf1db00e25f
📒 Files selected for processing (1)
components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx
```typescript
vi.mock('@/services/queries/use-feature-flags-admin', () => ({
  useWorkspaceFlag: vi.fn(() => ({ enabled: false, isLoading: false })),
}));
```
Make the feature-flag mock configurable instead of forcing the fallback path.
This locks useWorkspaceFlag() to disabled for the whole suite, so these tests never render the new LDAP autocomplete branch in SharingSection. The PR’s main UI path can regress without this file catching it.
Suggested test setup

```diff
 const mockAddMutate = vi.fn();
 const mockRemoveMutate = vi.fn();
 const mockRefetch = vi.fn();
+const mockUseWorkspaceFlag = vi.fn();
 vi.mock('@/services/queries/use-feature-flags-admin', () => ({
-  useWorkspaceFlag: vi.fn(() => ({ enabled: false, isLoading: false })),
+  useWorkspaceFlag: mockUseWorkspaceFlag,
 }));
 describe('SharingSection', () => {
   beforeEach(() => {
     vi.clearAllMocks();
+    mockUseWorkspaceFlag.mockReturnValue({ enabled: false, isLoading: false });
   });
+
+  it('renders the autocomplete flow when the workspace flag is enabled', () => {
+    mockUseWorkspaceFlag.mockReturnValue({ enabled: true, isLoading: false });
+
+    render(<SharingSection projectName="test-project" />);
+    fireEvent.click(screen.getByText('Grant Permission'));
+
+    expect(screen.getByRole('combobox')).toBeDefined();
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx`
around lines 29 - 31, The test currently forces useWorkspaceFlag() to always
return disabled which prevents rendering the LDAP-autocomplete branch in
SharingSection; change the mock so tests can control the flag per-case by
exporting/using a mutable mock function (e.g., vi.fn()) for useWorkspaceFlag and
set its return value inside individual tests or beforeEach/within specific test
cases; locate the vi.mock for use-feature-flags-admin and replace the fixed
return with a configurable mock (referencing useWorkspaceFlag and
SharingSection) so tests can call mockReturnValue or mockImplementation to
enable/disable the feature as needed.
Claude Code Review

Summary

This PR adds LDAP-backed autocomplete for user and group selection in the workspace sharing dialog. The implementation is well-scoped: a new Go LDAP package with in-memory caching, three authenticated API endpoints, Next.js proxy routes, and a React Query-powered frontend. Graceful degradation (LDAP optional, feature-flagged) is handled correctly throughout. However, a production manifest with TLS verification disabled and a frontend component that bypasses Shadcn UI conventions need to be addressed.

Issues by Severity

Blocker Issues

None

Critical Issues

1. Production manifest ships with TLS verification disabled

```go
InsecureSkipVerify: c.skipTLSVerify, //nolint:gosec // controlled by LDAP_SKIP_TLS_VERIFY env var for dev
```

Setting it to "true" in the production overlay disables server authentication for the LDAPS connection. The correct approach is to keep verification on and trust the issuing CA. Violates: security-standards.md.

2. Custom autocomplete component bypasses Shadcn UI conventions. Violates: frontend-development.md.

Major Issues

3. In-memory cache has no active eviction

Without a background sweep, expired entries can accumulate; a periodic sweeper bounds memory:

```go
go func() {
	ticker := time.NewTicker(c.cacheTTL)
	defer ticker.Stop()
	for range ticker.C {
		c.cache.Range(func(key, val any) bool {
			if e, ok := val.(cacheEntry); ok && time.Now().After(e.expiresAt) {
				c.cache.Delete(key)
			}
			return true
		})
	}
}()
```

Minor Issues

4. (item text missing from the captured comment)

5. LDAP startup log reveals deployment configuration at INFO level

Positive Highlights

Recommendations

Review generated by Claude Code using repository standards.
Review Queue — Blockers Found
Integrate LDAP to provide autocomplete when sharing workspaces. The backend queries LDAP with substring matching on uid, givenName, and sn for users, and prefix matching on cn for groups. Results are cached in-memory with a 5-minute TTL. The frontend replaces the plain text input with an autocomplete dropdown that debounces queries and supports keyboard navigation. LDAP is optional and gracefully degrades to manual input when unconfigured or unreachable.
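The filters described above can be sketched as follows. The real backend uses go-ldap's `EscapeFilter` for injection prevention; here it is approximated with a minimal RFC 4515 replacer so the snippet stays self-contained, and the function names are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// filterEscaper escapes the characters RFC 4515 treats as special in
// search filters, preventing a query from injecting extra filter terms.
var filterEscaper = strings.NewReplacer(
	`\`, `\5c`, `*`, `\2a`, `(`, `\28`, `)`, `\29`, "\x00", `\00`,
)

// userFilter matches substrings on uid, givenName, and sn, as described above.
func userFilter(q string) string {
	e := filterEscaper.Replace(q)
	return fmt.Sprintf("(|(uid=*%s*)(givenName=*%s*)(sn=*%s*))", e, e, e)
}

// groupFilter matches cn by prefix.
func groupFilter(q string) string {
	return fmt.Sprintf("(cn=%s*)", filterEscaper.Replace(q))
}

func main() {
	fmt.Println(userFilter("mprpic"))
	// (|(uid=*mprpic*)(givenName=*mprpic*)(sn=*mprpic*))
	fmt.Println(groupFilter("admin)(uid=*")) // injection attempt is neutralized
	// (cn=admin\29\28uid=\2a*)
}
```

Escaping before interpolation is what makes the substring wildcards safe: only the wildcards added by the code survive, never ones supplied in the query.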